Extended Successive Convex Approximation for Phase Retrieval With Dictionary Learning
Authors
Abstract
Phase retrieval aims at recovering unknown signals from magnitude measurements of linear mixtures. In this paper, we consider the phase retrieval with dictionary learning problem, which incorporates the additional prior information that the signal admits a sparse representation over an unknown dictionary. The task is to jointly estimate the dictionary and the sparse codes from magnitude-only measurements. To this end, we study two complementary formulations and develop efficient parallel algorithms by extending the successive convex approximation framework using a smooth majorization. The first algorithm, termed compact-SCAphase, is preferable in the case of moderately diverse mixture models with a low number of mixing components. It adopts a compact formulation that avoids auxiliary variables; the proposed algorithm is highly scalable and has a reduced parameter tuning cost. The second algorithm, referred to as SCAphase, uses auxiliary variables and is favorable in the case of highly diverse mixture models. It also renders the incorporation of additional side constraints simple. The performance of both methods is evaluated when applied to blind channel estimation from subband magnitude measurements in a multi-antenna random access network. Simulation results show the efficiency of the proposed techniques compared to state-of-the-art methods.
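The measurement model described in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' code: all dimensions and the Gaussian dictionary/mixing operators are assumptions made for the example.

```python
# Sketch of the phase retrieval with dictionary learning measurement model:
# the signal s = D @ x admits a sparse representation x over a dictionary D,
# and only the magnitudes of the linear mixtures A @ s are observed.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, k = 32, 16, 64, 3          # signal dim, atoms, measurements, sparsity

D = rng.standard_normal((n, d))     # dictionary (unknown, to be learned)
x = np.zeros(d)                     # sparse code with k nonzero entries
x[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))     # known linear mixing operator

s = D @ x                           # signal with sparse representation
y = np.abs(A @ s)                   # magnitude-only measurements (phase lost)
```

The joint recovery task is then to estimate both `D` and `x` given only `y` and `A`, which is what the two proposed SCA-based algorithms address.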
Similar resources
DOLPHIn - Dictionary Learning for Phase Retrieval
We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase. Specifically, we consider the task of estimating a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal ...
Full text
Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization
This paper proposes a constrained stochastic successive convex approximation (CSSCA) algorithm to find a stationary point for a general non-convex stochastic optimization problem, whose objective and constraint functions are nonconvex and involve expectations over random states. The existing methods for non-convex stochastic optimization, such as the stochastic (average) gradient and stochastic...
Full text
Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization
Consider the problem of minimizing the sum of a smooth (possibly non-convex) and a convex (possibly nonsmooth) function involving a large number of variables. A popular approach to solve this problem is the block coordinate descent (BCD) method whereby at each iteration only one variable block is updated while the remaining variables are held fixed. With the recent advances in the developments ...
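The composite problem class in this snippet (smooth possibly nonconvex plus convex nonsmooth) can be illustrated with block coordinate descent on a toy instance. This is an assumed example, not code from the paper: the lasso problem min_x 0.5·||Ax − b||² + λ||x||₁ solved with scalar blocks, updating one coordinate while the others are held fixed.

```python
# Cyclic block coordinate descent (scalar blocks) on the lasso problem:
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.| (handles the nonsmooth l1 term)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bcd_lasso(A, b, lam, iters=200):
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature ||A_j||^2
    r = b - A @ x                      # running residual
    for _ in range(iters):
        for j in range(n):             # update block j, others held fixed
            r += A[:, j] * x[j]        # remove coordinate j from residual
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]        # put updated coordinate back
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true                         # noiseless measurements
x_hat = bcd_lasso(A, b, lam=0.1)
```

Each inner step solves the one-dimensional subproblem exactly via the soft-thresholding proximal operator; the parallel SCA methods discussed above instead update many blocks simultaneously.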
Full text
Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation
We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal. Therefore, we avoid the prohibitive computational cost associated with “lifting” and semidefinite programming (SDP) in methods such as PhaseLift and compete with recently developed non-convex techniques for phase retrieval. We relax the quadratic equations for phaseless mea...
Full text
Journal
Journal title: IEEE Transactions on Signal Processing
Year: 2022
ISSN: 1053-587X, 1941-0476
DOI: https://doi.org/10.1109/tsp.2022.3233253